Problem
Question
You are given two arrays rowSum and colSum of non-negative integers, where rowSum[i] is the sum of the elements in the ith row and colSum[j] is the sum of the elements of the jth column of a 2D matrix. In other words, you do not know the elements of the matrix, but you do know the sums of each row and column.
Find any matrix of non-negative integers of size rowSum.length x colSum.length that satisfies the rowSum and colSum requirements.
Return a 2D array representing any matrix that fulfills the requirements. It’s guaranteed that at least one matrix that fulfills the requirements exists.
Example 1:
Input: rowSum = [3,8], colSum = [4,7]
Output: [[3,0], [1,7]]
Explanation:
0th row: 3 + 0 = 3 == rowSum[0]
1st row: 1 + 7 = 8 == rowSum[1]
0th column: 3 + 1 = 4 == colSum[0]
1st column: 0 + 7 = 7 == colSum[1]
The row and column sums match, and all matrix elements are non-negative.
Another possible matrix is: [[1,2], [3,5]]
Example 2:
Input: rowSum = [5,7,10], colSum = [8,6,8]
Output: [[0,5,0], [6,1,0], [2,0,8]]
Constraints:
1 <= rowSum.length, colSum.length <= 500
0 <= rowSum[i], colSum[j] <= 10^8
sum(rowSum) == sum(colSum)
Solutions
🏆 Greedy
Time Complexity: O(m × n) | Space Complexity: O(1) extra, where m = len(rowSum) and n = len(colSum) (the output matrix itself is O(m × n))
The solution is simpler than one might think at first. It uses a greedy approach to construct the required matrix. We start by initializing a matrix of zeros with dimensions derived from rowSum and colSum. For each cell (i, j), we assign the minimum of the remaining sum of the i-th row and the remaining sum of the j-th column, then subtract the assigned value from both rowSum[i] and colSum[j]. This is safe because sum(rowSum) == sum(colSum): after cell (i, j) is processed, at least one of the two residuals is zero, residuals never increase, and the equal totals force every residual down to zero by the time the matrix is filled.
from typing import List


class Solution:
    def restoreMatrix(self, rowSum: List[int], colSum: List[int]) -> List[List[int]]:
        # Start from an all-zero matrix of the required shape.
        matrix = [[0] * len(colSum) for _ in range(len(rowSum))]
        for i in range(len(rowSum)):
            for j in range(len(colSum)):
                # Assign as much as both the row and the column can still take.
                matrix[i][j] = min(rowSum[i], colSum[j])
                # Shrink the remaining sums by the amount just placed.
                rowSum[i] -= matrix[i][j]
                colSum[j] -= matrix[i][j]
        return matrix
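As a quick sanity check (a minimal sketch, assuming the Solution class above is in scope), we can run Example 1 and verify both constraints. Note that restoreMatrix mutates its arguments, so we pass copies:

rows, cols = [3, 8], [4, 7]
result = Solution().restoreMatrix(rows[:], cols[:])  # pass copies: the method zeroes its inputs
assert [sum(r) for r in result] == [3, 8]            # row sums match rowSum
assert [sum(c) for c in zip(*result)] == [4, 7]      # column sums match colSum
print(result)  # one valid answer, e.g. [[3, 0], [1, 7]]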
Before arriving at the solution above, I experimented with more complex approaches. Initially, I theorized that the matrix should be filled by prioritizing the largest sums in rowSum and colSum. I also created additional dictionaries to track how much of each row and column sum had already been consumed.
- Version 1:
import heapq
from collections import defaultdict
from typing import List


class Solution:
    def restoreMatrix(self, rowSum: List[int], colSum: List[int]) -> List[List[int]]:
        heap = []
        memoColSum = defaultdict(int)  # amount already consumed per column
        memoRowSum = defaultdict(int)  # amount already consumed per row
        for rowIn, row in enumerate(rowSum):
            for colIn, col in enumerate(colSum):
                # Largest combined row + column sum comes out of the heap first.
                heapq.heappush(heap, (-(row + col), (row, col), (rowIn, colIn)))
        matrix = [[0 for _ in range(len(colSum))] for _ in range(len(rowSum))]
        while heap:
            _, (rSum, cSum), (row, col) = heapq.heappop(heap)
            # Place as much as both the row and the column can still absorb.
            value = min(rSum - memoRowSum[row], cSum - memoColSum[col])
            value = max(0, value)
            matrix[row][col] = value
            memoColSum[col] += value
            memoRowSum[row] += value
        return matrix
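In hindsight, the heap buys nothing: assigning min(remaining row, remaining col) is safe in any visiting order. After cell (i, j) is processed, at least one of row i's and column j's residuals is zero, and residuals never grow; since the two totals are equal, a positive leftover in some row would force a positive leftover in some column, contradicting that their shared cell was already zeroed out. It also means the max(0, value) guard above can never trigger, because residuals stay non-negative. A quick experiment supports this (my own sketch, not part of the original versions; restore_any_order is a hypothetical helper):

import random

def restore_any_order(rowSum, colSum):
    # Same greedy assignment, but visiting the cells in a shuffled order.
    m, n = len(rowSum), len(colSum)
    matrix = [[0] * n for _ in range(m)]
    cells = [(i, j) for i in range(m) for j in range(n)]
    random.shuffle(cells)
    for i, j in cells:
        value = min(rowSum[i], colSum[j])
        matrix[i][j] = value
        rowSum[i] -= value
        colSum[j] -= value
    return matrix

matrix = restore_any_order([5, 7, 10], [8, 6, 8])
assert [sum(r) for r in matrix] == [5, 7, 10]
assert [sum(c) for c in zip(*matrix)] == [8, 6, 8]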
- Version 2:
from collections import defaultdict
from typing import List


class Solution:
    def restoreMatrix(self, rowSum: List[int], colSum: List[int]) -> List[List[int]]:
        array = []
        memoColSum = defaultdict(int)  # amount already consumed per column
        memoRowSum = defaultdict(int)  # amount already consumed per row
        for rowIn, row in enumerate(rowSum):
            for colIn, col in enumerate(colSum):
                array.append((row, col, rowIn, colIn))
        matrix = [[0 for _ in range(len(colSum))] for _ in range(len(rowSum))]
        for rSum, cSum, row, col in array:
            value = min(rSum - memoRowSum[row], cSum - memoColSum[col])
            matrix[row][col] = value
            memoColSum[col] += value
            memoRowSum[row] += value
        return matrix
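Version 2 drops the heap and visits the cells in plain row-major order. At that point the memo dictionaries are just a roundabout way of tracking the remaining sums, which is exactly what subtracting from rowSum and colSum in place achieves, leading to the final version.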
- Version 3: identical to the final greedy solution shown above, so the code is not repeated here.